# 33 Change of basis: Part 1

Coordinate vectors with respect to an ordered basis.

## Calculating coefficients of a linear combination with respect to a basis set.

First a warm up. In $\mathbb{R}^{2}$, a simple choice of basis set is the **standard basis set** $\vec e_{1}=\begin{pmatrix}1\\0\end{pmatrix}, \vec e_{2}=\begin{pmatrix}0\\1\end{pmatrix}$.

Recall that if we have a basis set for a linear space, then every vector in this linear space can be written uniquely as a linear combination of the basis vectors. For example, the vectors $\vec v=\begin{pmatrix}1\\2\end{pmatrix}$ and $\vec w = \begin{pmatrix}2\\-1\end{pmatrix}$ (arbitrarily chosen) are uniquely the linear combinations $$ \begin{array}{ccc} \vec v = \vec e_{1} + 2 \vec e_{2} & \text{and} & \vec w = 2\vec e_{1} -\vec e_{2}. \end{array} $$

Let us take another basis set of $\mathbb{R}^{2}$, say $\beta = \{\vec b_{1}, \vec b_{2}\}$ where $$ \begin{array}{ccc} \vec b_{1} = \begin{pmatrix}3\\1\end{pmatrix} & \text{and} & \vec b_{2}=\begin{pmatrix}2\\2\end{pmatrix}. \end{array} $$

By the way, remember that we can check that $\beta = \{\vec b_{1},\vec b_{2}\}$ does indeed form a basis set for $\mathbb{R}^{2}$ by noting that the vectors are linearly independent and span $\mathbb{R}^{2}$. You can check this by showing that the matrix $[\ \vec b_{1}\ |\ \vec b_{2}\ ]$ is invertible (row reducing it yields $n=2$ pivots).

What are $\vec v = \begin{pmatrix}1\\2\end{pmatrix}$ and $\vec w=\begin{pmatrix}2\\-1\end{pmatrix}$ as linear combinations of $\vec b_{1}$ and $\vec b_{2}$? Since this amounts to solving the equation $\vec v=c_{1}\vec b_{1}+c_{2}\vec b_{2}=[\ \vec b_{1}\ |\ \vec b_{2}\ ]\begin{pmatrix}c_{1}\\c_{2}\end{pmatrix}$, we can find the coefficients by performing the row reduction $$ [\ \vec b_{1}\ |\ \vec b_{2}\ \vdots\ \vec v] =\begin{pmatrix}3 & 2 & \vdots & 1 \\ 1 & 2 & \vdots & 2\end{pmatrix} \stackrel{\text{row}}\sim \begin{pmatrix}1 & 0 & \vdots & -\frac{1}{2} \\ 0 & 1 & \vdots & \frac{5}{4}\end{pmatrix}, $$ and working out the combination for $\vec w$ similarly, we get $$ \begin{array}{c} \vec v = -\frac{1}{2}\vec b_{1} + \frac{5}{4}\vec b_{2} \\ \vec w = \frac{3}{2}\vec b_{1} - \frac{5}{4}\vec b_{2}. \end{array} $$

To make a computational note here:

> If $\vec b_{1},\vec b_{2},\ldots,\vec b_{n}$ is a basis for $\mathbb{R}^{n}$, then any vector $\vec x \in \mathbb{R}^{n}$ can be written as a linear combination $\vec x = c_{1}\vec b_{1}+c_{2}\vec b_{2}+\cdots+c_{n}\vec b_{n}$, whose coefficients can be found by row reducing $$ [\ \vec b_{1} \ | \ \vec b_{2} \ |\ \cdots\ |\ \vec b_{n}\ \vdots\ \vec x]\stackrel{\text{row}}\sim \left[\begin{array}{cccc|c} 1 & & & & c_{1}\\ & 1 & & & c_{2}\\ & & \ddots & & \vdots \\ & & & 1& c_{n} \end{array}\right] $$

## Coordinate vectors with respect to an ordered basis.

The coefficients in such a linear combination can serve as a kind of coordinates. To this end, we make the following definition:

> **Definition. Coordinate vector with respect to an ordered basis.**
> Let $\beta = (\vec b_{1}, \vec b_{2},\ldots, \vec b_{n})$ be an **ordered basis** for $\mathbb{R}^{n}$, meaning the order of the basis vectors in the list $\beta$ matters. Then for any vector $\vec x\in \mathbb{R}^{n}$ we write its **coordinate vector with respect to the ordered basis $\beta$** as $$ [\vec x]_{\beta}=\begin{pmatrix}c_{1}\\c_{2}\\\vdots\\c_{n}\end{pmatrix} $$ where $\vec x = c_{1}\vec b_{1}+c_{2}\vec b_{2}+\cdots + c_{n} \vec b_{n}$.
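If you want to check such computations by machine, here is a minimal sketch in Python with NumPy (an illustrative choice of tool on my part; the notes themselves don't prescribe any software). Solving the linear system $[\ \vec b_{1}\ |\ \vec b_{2}\ ]\begin{pmatrix}c_{1}\\c_{2}\end{pmatrix}=\vec x$ numerically is equivalent to the row reduction above.

```python
import numpy as np

# The basis vectors b1, b2 of beta, placed as the columns of a matrix.
B = np.column_stack([[3, 1], [2, 2]])  # [ b1 | b2 ]

v = np.array([1, 2])
w = np.array([2, -1])

# Solving B c = v is equivalent to row reducing [ b1 | b2 : v ].
print(np.linalg.solve(B, v))  # [-0.5   1.25], i.e. v = -1/2 b1 + 5/4 b2
print(np.linalg.solve(B, w))  # [ 1.5  -1.25], i.e. w =  3/2 b1 - 5/4 b2
```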
So in the above example, $$ \begin{array}{ccc} [\vec v]_{\beta}= \begin{pmatrix}-\frac{1}{2}\\ \frac{5}{4}\end{pmatrix} & \text{and} & [\vec w]_{\beta} = \begin{pmatrix} \frac{3}{2} \\ -\frac{5}{4}\end{pmatrix}. \end{array} $$

**Example.** Consider the vector $\vec v =\begin{pmatrix}3\\1\\4\end{pmatrix}$ in $\mathbb{R}^{3}$. If we have the ordered basis $$\beta = \left( \begin{pmatrix}1\\1\\1\end{pmatrix}, \begin{pmatrix}1\\1\\0\end{pmatrix},\begin{pmatrix}1\\0\\0\end{pmatrix}\right)$$ for $\mathbb{R}^{3}$, then we have the coordinate vector $$ [\vec v]_{\beta} = \begin{pmatrix}4 \\ -3 \\ 2\end{pmatrix}. $$ If instead the ordered basis is $$ \gamma = \left( \begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}1\\1\\1\end{pmatrix},\begin{pmatrix}1\\1\\0\end{pmatrix}\right) $$ (which is the same set of vectors as in $\beta$ but with a different ordering), then $$ [\vec v]_{\gamma} = \begin{pmatrix}2\\4\\-3\end{pmatrix}. $$ So the order matters. $\blacklozenge$

**Example.** If we have some vector $\vec v$ whose coordinate vector with respect to the ordered basis $\beta$ is $$ [\vec v]_{\beta} = \begin{pmatrix}1 \\ 2 \\ 3\end{pmatrix}, $$ where the ordered basis $\beta$ for $\mathbb{R}^{3}$ is $$ \beta = \left(\begin{pmatrix}1\\1\\1\end{pmatrix},\begin{pmatrix}1\\1\\0\end{pmatrix},\begin{pmatrix}1\\0\\0\end{pmatrix}\right), $$ then the original vector $\vec v$ is $$ \vec v = 1 \begin{pmatrix}1\\1\\1\end{pmatrix}+2\begin{pmatrix}1\\1\\0\end{pmatrix}+3\begin{pmatrix}1\\0\\0\end{pmatrix}=\begin{pmatrix}6\\3\\1\end{pmatrix}. \quad\blacklozenge $$

We make some observations here.

> **Observation 1.**
> If we denote by $\text{std} = (\vec e_{1},\vec e_{2},\ldots,\vec e_{n})$ the standard ordered basis for $\mathbb{R}^{n}$, then any vector $\vec x \in \mathbb{R}^{n}$ has $\vec x = [\vec x]_{\text{std}}$.

And,

> **Observation 2.**
> If we denote by $\beta=(\vec b_{1},\vec b_{2},\ldots,\vec b_{n})$ some ordered basis for $\mathbb{R}^{n}$, and we write $P_{\beta}$ for the $n\times n$ matrix whose columns are the vectors of $\beta$ in the order given, $$ P_{\beta} = [\ \vec b_{1}\ | \ \vec b_{2}\ | \cdots | \ \vec b_{n}\ ], $$ then we have for any $\vec x \in \mathbb{R}^{n}$, $$ \vec x = P_{\beta} [\vec x]_{\beta}. $$ And since $P_{\beta}$ is an invertible matrix, we have $$ [\vec x]_{\beta}=P_{\beta}^{-1}\vec x. $$

## Thinking diagrammatically.

The action of taking a vector $\vec x$ and writing its coordinate vector $[\vec x]_{\beta}$ with respect to some fixed ordered basis $\beta$ for $\mathbb{R}^{n}$ is a mapping, $$ \begin{array}{rccl} [\,\cdot \,]_{\beta}: & \mathbb{R}^{n} & \to & \mathbb{R}^{n} \\ & \vec x & \mapsto & [\vec x]_{\beta} = P^{-1}_{\beta}\vec x \end{array} $$ And in fact this is an **invertible linear map**. We can see that it is linear because it is given by a left matrix multiplication! Diagrammatically, $$ \begin{array}{ccc} \underset{(\text{std})}{\mathbb{R}^{n}} & \xrightarrow{P^{-1}_{\beta}} & \underset{(\beta)}{\mathbb{R}^{n}} \\ \vec x & \mapsto & [\vec x]_{\beta} \end{array} $$ and to go back we have $$ \begin{array}{ccc} \underset{(\text{std})}{\mathbb{R}^{n}} & \xleftarrow{P_{\beta}} & \underset{(\beta)}{\mathbb{R}^{n}} \\ \vec x & ⟻ & [\vec x]_{\beta} \end{array} $$

This **diagrammatic** thinking can help us analyze how to relate different quantities. It is in fact a powerful tool in algebra, in particular in the form of something called a **commutative diagram**. (In fact, there are parts of algebra that essentially study diagrams of arrows, called **category theory**, sometimes given the affectionate name of "abstract nonsense". Of course, I am simplifying much of its complexity.)
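Before we put this diagrammatic thinking to work, here is a quick numerical sketch of the round trip above, checking Observation 2 on the earlier $\mathbb{R}^{3}$ example (again in Python with NumPy, purely as an illustration):

```python
import numpy as np

# The ordered basis beta from the R^3 example, as the columns of P_beta.
P_beta = np.column_stack([[1, 1, 1], [1, 1, 0], [1, 0, 0]])

v = np.array([3, 1, 4])

# Going to beta-coordinates: [v]_beta = P_beta^{-1} v.
# (Solving the system is preferable to explicitly inverting P_beta.)
v_beta = np.linalg.solve(P_beta, v)
print(v_beta)           # [ 4. -3.  2.]

# Going back to the standard basis: v = P_beta [v]_beta.
print(P_beta @ v_beta)  # [3. 1. 4.]
```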
Let us see how this diagrammatic thinking can help us analyze a problem.

**Example.** Consider a vector $\vec x \in \mathbb{R}^{2}$ whose coordinate vector with respect to some ordered basis $\beta$ is $$ [\vec x]_{\beta} = \begin{pmatrix}4\\1\end{pmatrix}, $$ where $\beta = \left(\begin{pmatrix}1\\1\end{pmatrix},\begin{pmatrix}2\\1\end{pmatrix}\right)$. Find the coordinate vector $[\vec x]_{\gamma}$, where $\gamma$ is the ordered basis given by $\gamma = \left(\begin{pmatrix}3\\2\end{pmatrix},\begin{pmatrix}2\\-1\end{pmatrix}\right)$.

$\blacktriangleright$ Here we are given the coordinate vector with respect to the basis $\beta$, and we want to write it in terms of the basis $\gamma$. There are a couple of ways to think about this.

If we write $P_{\beta}$ for the matrix whose columns are the vectors of the ordered basis $\beta$, and $P_{\gamma}$ for the matrix whose columns are the vectors of $\gamma$, then recall we have $$ \vec x = P_{\beta}[\vec x]_{\beta} $$ and $$ \vec x = P_{\gamma}[\vec x]_{\gamma}. $$ Since these refer to the same vector $\vec x$, we have the equality $$ P_{\beta}[\vec x]_{\beta} = P_{\gamma}[\vec x]_{\gamma}, $$ and multiplying both sides by $P_{\gamma}^{-1}$ from the left gives $$ [\vec x]_{\gamma} = P^{-1}_{\gamma}P_{\beta}[\vec x]_{\beta}. $$

Another way to see it is by diagrams:

![[smc-spring-2024-math-13/linear-algebra-notes/---files/33-change-of-coord-1.svg]]

To take the coordinate vector $[\vec x]_{\beta}$ back to its standard basis representation $\vec x$, we multiply by $P_{\beta}$. Then to take the vector $\vec x$ to the coordinate vector $[\vec x]_{\gamma}$, we multiply by $P_{\gamma}^{-1}$. So for the whole journey, we multiply $[\vec x]_{\beta}$ by $P_{\gamma}^{-1}P_{\beta}$ to turn it into $[\vec x]_{\gamma}$. This type of diagram is called a **commutative diagram**, since following the solid arrows gives the same result as following the dashed arrow. We will see more of this in the future. $\blacklozenge$

Bottom line: You want to think of the relation between a vector $\vec x \in \mathbb{R}^{n}$ and its coordinate vector $[\vec x]_{\beta}$ with respect to some ordered basis $\beta$ as a **re-labeling**: the vector $\vec x \in \mathbb{R}^{n}$ has a new name $[\vec x]_{\beta}$ when we think of it in the $\beta$-coordinate system.
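Finally, as a concrete check of the last example, here is a sketch carrying out the journey $[\vec x]_{\gamma} = P_{\gamma}^{-1}P_{\beta}[\vec x]_{\beta}$ numerically (NumPy once more, as an illustrative choice; working the arithmetic by hand gives $[\vec x]_{\gamma} = \begin{pmatrix}16/7 \\ -3/7\end{pmatrix}$):

```python
import numpy as np

# The ordered bases from the example, as columns.
P_beta  = np.column_stack([[1, 1], [2, 1]])    # beta  = ( (1,1), (2,1) )
P_gamma = np.column_stack([[3, 2], [2, -1]])   # gamma = ( (3,2), (2,-1) )

x_beta = np.array([4, 1])

# First leg of the journey: back to the standard basis, x = P_beta [x]_beta.
x = P_beta @ x_beta
print(x)        # [6 5]

# Second leg: into gamma-coordinates, [x]_gamma = P_gamma^{-1} x.
x_gamma = np.linalg.solve(P_gamma, x)
print(x_gamma)  # [ 2.28571429 -0.42857143], i.e. (16/7, -3/7)
```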